22 research outputs found

    Cost analysis of object-oriented bytecode programs

    Get PDF
    Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code, since the latter is often not available. Cost analysis in this context has to consider, among others, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to that resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.
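
    As a minimal, hypothetical illustration of the kind of output described above (the loop and the constants c0 and c1 are invented for this sketch, not taken from COSTA): for a simple counting loop and a cost model that counts executed bytecode instructions, the analysis would produce a recurrence-style cost relation whose closed form yields the upper bound on the cost.

        # Minimal, hypothetical sketch (not COSTA output). For a loop such as
        #   while (i < n) { ...; i++; }
        # with a cost model counting executed instructions, the generated cost
        # relation has roughly the form
        #   C(n) = c0              if n <= 0
        #   C(n) = c1 + C(n - 1)   if n > 0
        # and the inferred closed-form upper bound is c0 + c1 * n.

        def cost_relation(n, c0=2, c1=7):
            """Evaluate the recurrence directly (illustrative constants only)."""
            return c0 if n <= 0 else c1 + cost_relation(n - 1, c0, c1)

        def upper_bound(n, c0=2, c1=7):
            """Closed-form upper bound read off from the recurrence."""
            return c0 + c1 * max(n, 0)

        for n in (0, 5, 10):
            assert cost_relation(n) <= upper_bound(n)
            print(n, cost_relation(n), upper_bound(n))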

    Determining factors of regional supply of beef in Mexico, 1994-2013

    Get PDF
    Five beef-cattle producing regions in Mexico, over the period 1994-2013, were used to estimate multivariable linear econometric models whose dependent variable was supply. The corresponding elasticities were computed for each model in each region, emphasizing which of them had the greatest effect on the regional supply of beef from this livestock species.
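
    As a minimal sketch of the estimation step described above (the variables, data, and coefficients are synthetic placeholders, not the paper's specification or results): a linear supply equation is fit by ordinary least squares, and the price elasticity of supply is then evaluated at the sample means as the estimated slope times mean price over mean quantity.

        import numpy as np

        # Minimal sketch with synthetic data, not the paper's model or data.
        # Supply equation: Q = b0 + b1*P + b2*C, where P is a producer price and
        # C is an input cost; the price elasticity at the means is
        # b1 * mean(P) / mean(Q).
        rng = np.random.default_rng(0)
        P = rng.uniform(20, 40, 100)                          # illustrative prices
        C = rng.uniform(5, 15, 100)                           # illustrative input costs
        Q = 50 + 3.0 * P - 2.0 * C + rng.normal(0, 5, 100)    # synthetic supply

        X = np.column_stack([np.ones_like(P), P, C])
        beta, *_ = np.linalg.lstsq(X, Q, rcond=None)          # OLS estimates

        elasticity_P = beta[1] * P.mean() / Q.mean()          # point elasticity at the means
        print("coefficients:", beta)
        print("price elasticity of supply at the means:", elasticity_P)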

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    Get PDF
    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    An Integration of Partial Evaluation in a Generic Abstract Interpretation Framework

    No full text
    Information generated by abstract interpreters has long been used to perform program specialization. Additionally, if the abstract interpreter generates a multivariant analysis, it is also possible to perform multiple specialization. Information about values of variables is propagated by simulating program execution and performing fixpoint computations for recursive calls. In contrast, traditional partial evaluators (mainly) use unfolding for both propagating values of variables and transforming the program. It is known that abstract interpretation is a better technique for propagating success values than unfolding. However, the program transformations induced by unfolding may lead to important optimizations which are not directly achievable by abstract interpretation alone. The aim of this work is to devise a specialization framework which integrates the better information propagation of abstract interpretation with the powerful program transformations performed by partial evaluation, and which can be implemented via small modifications to existing generic abstract interpreters. With this aim, we will relate top-down abstract interpretation with traditional concepts in partial evaluation and sketch how the sophisticated techniques developed for controlling partial evaluation can be adapted to the proposed specialization framework. We conclude that there can be both practical and conceptual advantages in the proposed integration of partial evaluation and abstract interpretation.
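
    As a minimal sketch of the unfolding transformation mentioned above (written in Python rather than the logic-programming setting of the paper, and with an invented specialize_power helper): partially evaluating power(x, n) with a statically known exponent n unfolds the recursive calls at specialization time and leaves a residual program specialized for that exponent.

        # Minimal sketch, not the paper's framework: unfolding-based
        # specialization of power(x, n) for a statically known n.

        def power(x, n):
            """General program: x ** n by repeated multiplication."""
            return 1 if n == 0 else x * power(x, n - 1)

        def specialize_power(n):
            """Hypothetical specializer: unfold the recursion for a fixed n,
            producing a residual program such as lambda x: x * x * x."""
            body = " * ".join(["x"] * n) if n > 0 else "1"
            return eval(f"lambda x: {body}")

        power_3 = specialize_power(3)                 # residual program for n = 3
        assert power_3(5) == power(5, 3) == 125
        print(power_3(5))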

    Preface Volume 30, Issue 4

    Get PDF
    The characteristics of logic programming languages both necessitate and motivate innovative approaches in the areas of user environments and programming tools. The Tenth Workshop on Logic Programming Environments provided a forum for researchers, developers, programmers, and users to exchange ideas and results on all aspects of environments for logic programming. Contributions coming from Europe, the USA, Australia and Japan addressed debugging (four contributions), visualization, graphical spreadsheets, and automated documentation. Coming from a different background, Clinton Jeffery, invited speaker, presented a monitoring environment for C which, we hope, was inspiring for the logic programming community. A special thanks to Enrico Pontelli, ICLP'99 workshop coordinator. Thanks also to Francisco Bueno, Manuel Carro, Erwan Jahier, Sarah Mallet, Julio Marino, and Olivier Ridoux, who helped with paper reviewing.

    Preface

    No full text